METHOD OF DECODING A VIDEO
Patent Abstract:
A video entropy decoding method, video entropy decoding apparatus, video entropy encoding method, and video entropy encoding apparatus are provided. The entropy decoding method includes obtaining, from a bit stream, a significant transformation unit coefficient indicator that indicates whether a non-zero transformation coefficient exists in a transformation unit; determining a context model for arithmetically decoding the significant transformation unit coefficient indicator based on the transformation depth of the transformation unit; and arithmetically decoding the significant transformation unit coefficient indicator based on the determined context model.
Publication number: BR112014033040B1
Application number: R112014033040-9
Filing date: 2013-07-02
Publication date: 2020-09-29
Inventor: Il-koo KIM
Applicant: Samsung Electronics Co., Ltd.
IPC main classification:
Patent Description:
Technical Field [0001] One or more embodiments of the present invention relate to the encoding and decoding of video and, more particularly, to a method and apparatus for entropy encoding and entropy decoding information related to a transformation unit. Background of the Technique [0002] According to image compression methods such as MPEG-1, MPEG-2, MPEG-4, or H.264/MPEG-4 Advanced Video Coding (AVC), an image is divided into blocks having a predetermined size, and residual data of the blocks are then obtained by inter prediction or intra prediction. The residual data is compressed by transformation, quantization, scanning, run-length coding, and entropy coding. In entropy coding, a syntax element such as a transformation coefficient or a prediction mode is entropy coded to produce a bit stream. A decoder analyzes and extracts syntax elements from a bit stream, and reconstructs an image based on the extracted syntax elements. Disclosure of the Invention Technical Problem [0003] One or more embodiments of the present invention include an entropy encoding method and apparatus, and an entropy decoding method and apparatus, for selecting a context model used to entropy encode and decode a syntax element related to a transformation unit, which is a data unit used to transform a coding unit, based on a transformation depth indicating a hierarchical division relationship between the coding unit and the transformation unit. Technical Solution [0004] A context model for arithmetically decoding a significant transformation unit coefficient indicator is determined based on the transformation depth, which indicates the number of times the coding unit is divided to determine the transformation unit included in the coding unit, and the significant transformation unit coefficient indicator is arithmetically decoded based on the determined context model. 
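The context selection of paragraph [0004] can be illustrated with a short sketch. This is a hypothetical, HEVC-style mapping in which the luma cbf chooses between two contexts depending on whether the transformation depth is zero, and the chroma cbf uses the transformation depth directly as its context increment; the function name and the exact mapping are illustrative assumptions, not the codec's normative definition.

```python
def cbf_context_increment(transform_depth: int, is_luma: bool) -> int:
    """Select a context increment for the significant transformation unit
    coefficient indicator (cbf) from the transformation depth alone.

    Assumption (HEVC-style, illustrative): the luma cbf uses one of two
    contexts depending on whether the transformation unit equals the coding
    unit (depth 0), while the chroma cbf uses the depth as the increment.
    """
    if is_luma:
        return 1 if transform_depth == 0 else 0
    return transform_depth

# A transformation unit at depth 0 (same size as the coding unit):
print(cbf_context_increment(0, is_luma=True))   # luma at depth 0 -> context 1
print(cbf_context_increment(2, is_luma=False))  # chroma at depth 2 -> context 2
```

Because the increment depends only on the transformation depth, no neighboring-block values need to be fetched, which is the simplification the invention claims.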
Advantageous Effects [0005] According to embodiments of the present invention, by selecting a context model based on a transformation depth, a condition for selecting the context model can be simplified, and the operations for entropy encoding and decoding can also be simplified. Brief Description of Drawings [0006] FIG. 1 is a block diagram of a video encoding apparatus according to an embodiment of the present invention. [0007] FIG. 2 is a block diagram of a video decoding apparatus according to an embodiment of the present invention. [0008] FIG. 3 is a diagram for describing a concept of coding units according to an embodiment of the present invention. [0009] FIG. 4 is a block diagram of a video encoder based on coding units having a hierarchical structure, according to an embodiment of the present invention. [00010] FIG. 5 is a block diagram of a video decoder based on coding units having a hierarchical structure, according to an embodiment of the present invention. [00011] FIG. 6 is a diagram illustrating deeper coding units according to depths, and partitions, according to an embodiment of the present invention. [00012] FIG. 7 is a diagram for describing a relationship between a coding unit and transformation units, according to an embodiment of the present invention. [00013] FIG. 8 is a diagram for describing coding information of coding units corresponding to a coded depth, according to an embodiment of the present invention. [00014] FIG. 9 is a diagram of deeper coding units according to depths, according to an embodiment of the present invention. [00015] FIGS. 10 to 12 are diagrams for describing a relationship between coding units, prediction units, and frequency transformation units, according to an embodiment of the present invention. [00016] FIG. 13 is a diagram for describing a relationship between a coding unit, a prediction unit, and a transformation unit, according to the coding mode information of Table 1. [00017] FIG. 
14 is a block diagram of an entropy encoding apparatus according to an embodiment of the present invention. [00018] FIG. 15 is a flowchart of an entropy encoding and decoding operation of a syntax element related to a transformation unit, according to an embodiment of the present invention. [00019] FIG. 16 is a diagram illustrating a coding unit and transformation units included in the coding unit, according to an embodiment of the present invention. [00020] FIG. 17 is a diagram illustrating a context increase parameter used to determine a context model of a significant transformation unit coefficient indicator for each of the transformation units of FIG. 16, based on a transformation depth. [00021] FIG. 18 is a diagram illustrating a coding unit and a transformation unit included in the coding unit, according to another embodiment of the present invention. [00022] FIG. 19 is a diagram illustrating split transformation indicators used to determine the structure of the transformation units included in the coding unit of FIG. 16, according to an embodiment of the present invention. [00023] FIG. 20 illustrates a transformation unit that is entropy encoded according to an embodiment of the present invention. [00024] FIG. 21 illustrates a significance map corresponding to the transformation unit of FIG. 20. [00025] FIG. 22 illustrates the coeff_abs_level_greater1_flag indicator corresponding to the 4 x 4 transformation unit of FIG. 20. [00026] FIG. 23 illustrates the coeff_abs_level_greater2_flag indicator corresponding to the 4 x 4 transformation unit of FIG. 20. [00027] FIG. 24 illustrates the coeff_abs_level_remaining syntax element corresponding to the 4 x 4 transformation unit of FIG. 20. [00028] FIG. 25 is a flowchart of a video entropy encoding method, according to an embodiment of the present invention. [00029] FIG. 26 is a block diagram of an entropy decoding apparatus according to an embodiment of the present invention. [00030] FIG. 
27 is a flowchart of a video entropy decoding method, according to an embodiment of the present invention. Best Mode for Carrying Out the Invention [00031] According to one or more embodiments of the present invention, a video entropy decoding method is provided, the method including: determining a transformation unit that is included in a coding unit and used to inversely transform the coding unit; obtaining, from a bit stream, a significant transformation unit coefficient indicator that indicates whether a non-zero transformation coefficient exists in the transformation unit; when the number of times the coding unit is divided to determine the transformation unit is referred to as a transformation depth of the transformation unit, determining a context model for arithmetically decoding the significant transformation unit coefficient indicator based on the transformation depth of the transformation unit; and arithmetically decoding the significant transformation unit coefficient indicator based on the determined context model. [00032] According to one or more embodiments of the present invention, a video entropy decoding apparatus is provided, the apparatus including: an analyzer for obtaining, from a bit stream, a significant transformation unit coefficient indicator that indicates whether a non-zero transformation coefficient exists in a transformation unit that is included in a coding unit and used to inversely transform the coding unit; a context modeler for, when the number of times the coding unit is divided to determine the transformation unit is referred to as a transformation depth of the transformation unit, determining a context model for arithmetically decoding the significant transformation unit coefficient indicator based on the transformation depth of the transformation unit; and an arithmetic decoder for arithmetically decoding the significant transformation unit coefficient indicator based on the determined context model. 
[00033] According to one or more embodiments of the present invention, a video entropy encoding method is provided, the method including: obtaining data of a coding unit transformed based on a transformation unit; when the number of times the coding unit is divided to determine the transformation unit is referred to as a transformation depth of the transformation unit, determining a context model for arithmetically encoding a significant transformation unit coefficient indicator that indicates whether a non-zero transformation coefficient exists in the transformation unit, based on the transformation depth of the transformation unit; and arithmetically encoding the significant transformation unit coefficient indicator based on the determined context model. [00034] According to one or more embodiments of the present invention, a video entropy encoding apparatus is provided, the apparatus including: a context modeler for obtaining data of a coding unit transformed based on a transformation unit and, when the number of times the coding unit is divided to determine the transformation unit is referred to as a transformation depth of the transformation unit, determining a context model for arithmetically encoding a significant transformation unit coefficient indicator that indicates whether a non-zero transformation coefficient exists in the transformation unit, based on the transformation depth of the transformation unit; and an arithmetic encoder for arithmetically encoding the significant transformation unit coefficient indicator based on the determined context model. Mode for the Invention [00035] Hereinafter, a method and apparatus for updating a parameter used in entropy encoding and decoding of size information of a transformation unit, according to an embodiment of the present invention, will be described with reference to FIGS. 1 to 13. 
In addition, an entropy encoding and decoding method for syntax elements obtained by the video encoding and decoding methods described with reference to FIGS. 1 to 13 will be described in detail with reference to FIGS. 14 to 27. Expressions such as "at least one of", when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. [00036] FIG. 1 is a block diagram of a video encoding apparatus 100 according to an embodiment of the present invention. [00037] The video encoding apparatus 100 includes a hierarchical encoder 110 and an entropy encoder 120. [00038] The hierarchical encoder 110 can divide a current frame to be encoded into predetermined data units and perform encoding on each of the data units. In detail, the hierarchical encoder 110 can divide a current frame based on a larger coding unit, which is a coding unit of maximum size. The larger coding unit according to an embodiment of the present invention can be a data unit having a size of 32 x 32, 64 x 64, 128 x 128, 256 x 256, etc., where the data unit is a square whose width and length are powers of 2 greater than 8. [00039] The coding unit according to an embodiment of the present invention can be characterized by a maximum size and a depth. The depth denotes the number of times the coding unit is spatially divided from the larger coding unit, and as the depth deepens, deeper coding units according to depths can be split from the larger coding unit down to a smaller coding unit. The depth of the larger coding unit is the uppermost depth, and the depth of the smaller coding unit is the lowermost depth. Since the size of a coding unit corresponding to each depth decreases as the depth of the larger coding unit deepens, a coding unit corresponding to an upper depth may include a plurality of coding units corresponding to lower depths. 
[00040] As described above, the image data of the current frame is divided into larger coding units according to a maximum coding unit size, and each of the larger coding units may include deeper coding units that are divided according to depths. Since the larger coding unit according to an embodiment of the present invention is divided according to depths, the image data of a spatial domain included in the larger coding unit can be classified hierarchically according to depths. [00041] A maximum depth and a maximum size of a coding unit, which limit the total number of times a height and width of the larger coding unit are hierarchically divided, can be predetermined. [00042] The hierarchical encoder 110 encodes at least one divided region obtained by dividing a region of the larger coding unit according to depths, and determines a depth at which to output finally encoded image data according to the at least one divided region. In other words, the hierarchical encoder 110 determines a coded depth by encoding the image data in the deeper coding units according to depths, for each larger coding unit of the current frame, and selecting a depth having the minimum coding error. The determined coded depth and the image data encoded according to the larger coding units are output to the entropy encoder 120. [00043] The image data in the larger coding unit is encoded based on the deeper coding units corresponding to at least one depth equal to or less than the maximum depth, and the encoding results of the image data are compared based on each of the deeper coding units. A depth having the minimum coding error can be selected after comparing the coding errors of the deeper coding units. At least one coded depth can be selected for each larger coding unit. [00044] The size of the larger coding unit is split as a coding unit is hierarchically divided according to depths, and the number of coding units increases. 
In addition, even if coding units correspond to the same depth in one larger coding unit, whether to divide each of the coding units corresponding to the same depth to a lower depth is determined by measuring a coding error of the data of each coding unit separately. Thus, even when image data is included in one larger coding unit, the image data is divided into regions according to depths, and coding errors can differ according to the regions within the one larger coding unit; thus, the coded depths may differ according to the regions of the image data. Thus, one or more coded depths can be determined in one larger coding unit, and the image data of the larger coding unit can be divided according to the coding units of at least one coded depth. [00045] Accordingly, the hierarchical encoder 110 can determine coding units having a tree structure included in the larger coding unit. The 'coding units having a tree structure' according to an embodiment of the present invention include the coding units corresponding to a depth determined to be the coded depth, among all the deeper coding units included in the larger coding unit. A coding unit of a coded depth can be determined hierarchically according to depths in the same region of the larger coding unit, and can be determined independently in different regions. Likewise, a coded depth in a current region can be determined independently of a coded depth in another region. [00046] A maximum depth according to an embodiment of the present invention is an index related to the number of times a larger coding unit is divided into smaller coding units. A first maximum depth according to an embodiment of the present invention can denote the total number of times the larger coding unit is divided into smaller coding units. A second maximum depth according to an embodiment of the present invention can denote the total number of depth levels from the larger coding unit to the smallest coding unit. 
For example, when the depth of the larger coding unit is 0, the depth of a coding unit in which the larger coding unit is divided once can be set to 1, and the depth of a coding unit in which the larger coding unit is divided twice can be set to 2. Here, if the smallest coding unit is a coding unit in which the larger coding unit is divided four times, five depth levels of depths 0, 1, 2, 3, and 4 exist; thus, the first maximum depth can be set to 4, and the second maximum depth can be set to 5. [00047] Prediction encoding and transformation can be performed according to the larger coding unit. Prediction encoding and transformation are also performed according to the deeper coding units, according to a depth equal to or less than the maximum depth, per larger coding unit. [00048] Since the number of deeper coding units increases whenever the larger coding unit is divided according to depths, encoding, including prediction encoding and transformation, is performed on all of the deeper coding units generated as the depth deepens. For convenience of description, prediction encoding and transformation will now be described based on a coding unit of a current depth, in a larger coding unit. [00049] The video encoding apparatus 100 can variously select a size or shape of a data unit for encoding the image data. In order to encode the image data, operations such as prediction encoding, transformation, and entropy encoding are performed, and at this time, the same data unit can be used for all of the operations, or a different data unit can be used for each operation. [00050] For example, the video encoding apparatus 100 may select not only a coding unit for encoding the image data, but also a data unit different from the coding unit, in order to perform prediction encoding on the image data of the coding unit. 
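The depth arithmetic of paragraph [00046] can be checked with a short computation; the helper name and the 64-to-4 sizes are illustrative assumptions consistent with the example in the text.

```python
from math import log2

def split_count(larger_side: int, smaller_side: int) -> int:
    """Number of times a square coding unit of side `larger_side` must be
    halved (divided into four) to reach side `smaller_side`; this count is
    the first maximum depth."""
    return int(log2(larger_side // smaller_side))

largest, smallest = 64, 4  # illustrative: larger coding unit 64 x 64, smallest 4 x 4
first_max_depth = split_count(largest, smallest)   # divided four times
second_max_depth = first_max_depth + 1             # depth levels 0, 1, 2, 3, 4
print(first_max_depth, second_max_depth)  # 4 5
```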
[00051] In order to perform prediction encoding in the larger coding unit, prediction encoding can be performed based on a coding unit corresponding to a coded depth, that is, based on a coding unit that is no longer divided into coding units corresponding to a lower depth. Hereinafter, the coding unit that is no longer divided and becomes a base unit for prediction encoding will be referred to as a 'prediction unit'. A partition obtained by dividing the prediction unit can include the prediction unit, or a data unit obtained by splitting at least one of a height and a width of the prediction unit. [00052] For example, when a 2N x 2N coding unit (where N is a positive integer) is no longer divided and becomes a 2N x 2N prediction unit, a partition size can be 2N x 2N, 2N x N, N x 2N, or N x N. Examples of a partition type include symmetric partitions, which are obtained by symmetrically dividing a height or width of the prediction unit; partitions obtained by asymmetrically dividing the height or width of the prediction unit, such as 1:n or n:1; partitions obtained by geometrically dividing the prediction unit; and partitions having arbitrary shapes. [00053] A prediction mode of the prediction unit can be at least one of an intra mode, an inter mode, and a skip mode. For example, the intra mode or the inter mode can be performed on the 2N x 2N, 2N x N, N x 2N, or N x N partition. In addition, the skip mode can be performed only on the 2N x 2N partition. Encoding is performed independently on each prediction unit in a coding unit, thereby selecting a prediction mode having the minimum coding error. [00054] The video encoding apparatus 100 can also perform the transformation of the image data in a coding unit based not only on the coding unit for encoding the image data, but also based on a data unit that is different from the coding unit. 
[00055] In order to perform the transformation in the coding unit, the transformation can be performed based on a data unit having a size equal to or less than the size of the coding unit. For example, the data unit for the transformation can include a data unit for an intra mode and a data unit for an inter mode. [00056] A data unit used as a basis for the transformation is referred to as a 'transformation unit'. Like the coding unit, the transformation unit in the coding unit can be recursively divided into regions of smaller size, so that the transformation unit can be determined independently in units of regions. Thus, residual data in the coding unit can be divided according to transformation units having a tree structure according to transformation depths. [00057] A transformation depth, indicating the number of times the height and width of the coding unit are divided to reach the transformation unit, can also be set for the transformation unit. For example, in a current coding unit of 2N x 2N, a transformation depth can be 0 when the size of a transformation unit is 2N x 2N, can be 1 when the size of a transformation unit is N x N, and can be 2 when the size of a transformation unit is N/2 x N/2. That is, the transformation units having a tree structure can also be set according to transformation depths. [00058] Coding information according to coding units corresponding to a coded depth requires not only information about the coded depth, but also information related to prediction encoding and transformation. Accordingly, the hierarchical encoder 110 not only determines a coded depth having the minimum coding error, but also determines a partition type in a prediction unit, a prediction mode according to prediction units, and a size of a transformation unit for the transformation. 
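The transformation-depth example of paragraph [00057] follows the same halving rule; a minimal sketch, assuming square units and a hypothetical helper name:

```python
from math import log2

def transformation_depth(cu_side: int, tu_side: int) -> int:
    """Number of times the coding unit's height and width are halved to
    reach the transformation unit (0 when the two have the same size)."""
    return int(log2(cu_side // tu_side))

cu = 64  # a 2N x 2N coding unit with 2N = 64 (illustrative size)
for tu in (64, 32, 16):  # 2N x 2N, N x N, and N/2 x N/2 transformation units
    print(tu, transformation_depth(cu, tu))
# 64 -> depth 0, 32 -> depth 1, 16 -> depth 2
```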
[00059] Coding units according to a tree structure in a larger coding unit and a method of determining a partition, according to embodiments of the present invention, will be described in detail below with reference to FIGS. 3 to 12. [00060] The hierarchical encoder 110 can measure a coding error of the deeper coding units according to depths by using rate-distortion optimization based on Lagrangian multipliers. [00061] The entropy encoder 120 outputs the image data of the larger coding unit, which is encoded based on the at least one coded depth determined by the hierarchical encoder 110, and information about the encoding mode according to the coded depth, in bit streams. The encoded image data can be a result of encoding residual data of an image. The information about the encoding mode according to the coded depth can include information about the coded depth, information about the partition type in the prediction unit, prediction mode information, and size information of the transformation unit. In particular, as will be described below, the entropy encoder 120 can entropy encode a significant transformation unit coefficient indicator cbf (coded_block_flag), indicating whether a non-zero transformation coefficient is included in a transformation unit, by using a context model determined based on a transformation depth of the transformation unit. An operation of entropy encoding syntax elements related to a transformation unit in the entropy encoder 120 will be described below. [00062] The information about the coded depth can be defined by using split information according to depths, which indicates whether encoding is performed on coding units of a lower depth instead of a current depth. If the current depth of the current coding unit is the coded depth, the image data in the current coding unit is encoded and output, and thus the split information can be defined not to divide the current coding unit to a lower depth. 
Alternatively, if the current depth of the current coding unit is not the coded depth, encoding is performed on the coding unit of the lower depth, and thus the split information can be defined to divide the current coding unit to obtain the coding units of the lower depth. [00063] If the current depth is not the coded depth, encoding is performed on the coding unit that is divided into coding units of the lower depth. Since at least one coding unit of the lower depth exists in one coding unit of the current depth, encoding is repeatedly performed on each coding unit of the lower depth, and thus encoding can be recursively performed for coding units having the same depth. [00064] Since coding units having a tree structure are determined for one larger coding unit, and information about at least one encoding mode is determined for a coding unit of a coded depth, information about at least one encoding mode can be determined for one larger coding unit. In addition, a coded depth of the image data of the larger coding unit can vary according to locations, since the image data is hierarchically divided according to depths, and thus the information about the coded depth and the encoding mode can be set for the image data. [00065] Accordingly, the entropy encoder 120 can assign encoding information about a corresponding coded depth and an encoding mode to at least one of the coding unit, the prediction unit, and a minimum unit included in the larger coding unit. [00066] The minimum unit according to an embodiment of the present invention is a square data unit obtained by dividing the smallest coding unit constituting the lowermost depth by 4. Alternatively, the minimum unit may be a maximum square data unit that can be included in all of the coding units, prediction units, partition units, and transformation units included in the larger coding unit. 
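The split-information recursion of paragraphs [00062] and [00063] can be sketched as a quadtree walk. The function and callback names here are illustrative assumptions; the actual bitstream syntax and traversal order are defined by the codec, not by this sketch.

```python
def decode_coding_units(x, y, size, depth, get_split_flag, handle_leaf):
    """Recursively walk coding units: if the split information is set,
    descend into four half-size units at the next depth; otherwise the
    current depth is treated as the coded depth and the unit is a leaf."""
    if get_split_flag(x, y, size, depth):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                decode_coding_units(x + dx, y + dy, half, depth + 1,
                                    get_split_flag, handle_leaf)
    else:
        handle_leaf(x, y, size, depth)

# Example: split only at depth 0, keeping the four children whole.
leaves = []
decode_coding_units(0, 0, 64, 0,
                    get_split_flag=lambda x, y, s, d: d == 0,
                    handle_leaf=lambda x, y, s, d: leaves.append((x, y, s, d)))
print(leaves)  # four 32 x 32 leaves at depth 1
```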
[00067] For example, the encoding information output by the entropy encoder 120 can be classified into encoding information according to coding units and encoding information according to prediction units. The encoding information according to the coding units can include information about the prediction mode and about the size of the partitions. The encoding information according to the prediction units can include information about an estimated direction of an inter mode, about a reference image index of the inter mode, about a motion vector, about a chroma component of an intra mode, and about an interpolation method of the intra mode. In addition, information about a maximum size of the coding unit defined according to frames, slices, or GOPs, and information about a maximum depth, can be inserted into a header of a bit stream. [00068] In the video encoding apparatus 100, the deeper coding unit can be a coding unit obtained by dividing a height or width of a coding unit of an upper depth, which is one layer above, by two. In other words, when the size of the coding unit of the current depth is 2N x 2N, the size of the coding unit of the lower depth is N x N. In addition, the coding unit of the current depth having the size of 2N x 2N can include a maximum of four coding units of the lower depth. [00069] Consequently, the video encoding apparatus 100 can form the coding units having the tree structure by determining coding units having an optimal shape and an optimal size for each larger coding unit, based on the size of the larger coding unit and the maximum depth determined in consideration of characteristics of the current frame. In addition, since encoding can be performed on each larger coding unit by using any of various prediction and transformation modes, an optimal encoding mode can be determined in consideration of characteristics of coding units of various image sizes. 
[00070] Thus, if an image having a high resolution or a large amount of data is encoded in units of conventional macroblocks, the number of macroblocks per image increases excessively. Thus, the number of pieces of compressed information generated for each macroblock increases, and therefore it is difficult to transmit the compressed information, and data compression efficiency decreases. However, by using the video encoding apparatus 100, image compression efficiency can be increased, since a coding unit is adjusted in consideration of the characteristics of an image while the maximum size of a coding unit is increased in consideration of the size of the image. [00071] FIG. 2 is a block diagram of a video decoding apparatus 200 according to an embodiment of the present invention. [00072] The video decoding apparatus 200 includes an analyzer 210, an entropy decoder 220, and a hierarchical decoder 230. Definitions of various terms, such as a coding unit, a depth, a prediction unit, a transformation unit, and information about various encoding modes, for various operations of the video decoding apparatus 200 are identical to those described with reference to FIG. 1 and the video encoding apparatus 100. [00073] The analyzer 210 receives a bit stream of an encoded video and parses syntax elements. The entropy decoder 220 arithmetically decodes syntax elements indicating encoded image data based on coding units having a tree structure, by performing entropy decoding on the parsed syntax elements, and outputs the arithmetically decoded syntax elements to the hierarchical decoder 230. That is, the entropy decoder 220 performs entropy decoding on syntax elements that are received in the form of bit strings of 0s and 1s, thereby reconstructing the syntax elements. 
[00074] Furthermore, the entropy decoder 220 extracts information about a coded depth, an encoding mode, color component information, prediction mode information, etc., for the coding units having a tree structure according to each larger coding unit, from the parsed bit stream. The extracted information about the coded depth and the encoding mode is output to the hierarchical decoder 230. The image data in the bit stream is divided into the larger coding units so that the hierarchical decoder 230 can decode the image data for each larger coding unit. [00075] The information about the coded depth and the encoding mode according to the larger coding unit can be set for information about at least one coding unit corresponding to the coded depth, and the information about the encoding mode can include information about a partition type of a corresponding coding unit corresponding to the coded depth, about a prediction mode, and about a size of a transformation unit. In addition, split information according to depths can be extracted as the information about the coded depth. [00076] The information about the coded depth and the encoding mode according to each larger coding unit extracted by the entropy decoder 220 is information about a coded depth and an encoding mode determined to generate a minimum coding error when an encoder, such as the video encoding apparatus 100, repeatedly performs encoding for each deeper coding unit according to depths, for each larger coding unit. Accordingly, the video decoding apparatus 200 can reconstruct an image by decoding the image data according to a coded depth and an encoding mode that generate the minimum coding error. 
[00077] Since the encoding information about the coded depth and the encoding mode can be assigned to a predetermined data unit among a corresponding coding unit, a prediction unit, and a minimum unit, the entropy decoder 220 can extract the information about the coded depth and the encoding mode according to the predetermined data units. When the information about the coded depth and the encoding mode of a corresponding larger coding unit is assigned to each of the predetermined data units, the predetermined data units to which the same information about the coded depth and the encoding mode is assigned can be inferred to be the data units included in the same larger coding unit. [00078] Furthermore, as will be described below, the entropy decoder 220 can entropy decode a significant transformation unit coefficient indicator cbf by using a context model determined based on a transformation depth of a transformation unit. An entropy decoding operation of the syntax elements related to a transformation unit in the entropy decoder 220 will be described below. [00079] The hierarchical decoder 230 reconstructs the current frame by decoding the image data in each larger coding unit based on the information about the coded depth and the encoding mode according to the larger coding units. In other words, the hierarchical decoder 230 can decode the encoded image data based on the extracted information about the partition type, the prediction mode, and the transformation unit for each coding unit among the coding units having the tree structure included in each larger coding unit. The decoding operation can include a prediction process, including intra prediction and motion compensation, and an inverse transformation. 
[00080] The hierarchical decoder 230 can perform intra prediction or motion compensation according to a partition and a prediction mode of each coding unit, based on the information about the partition type and the prediction mode of the prediction unit of the coding unit according to the coded depths. [00081] In addition, the hierarchical decoder 230 can perform the inverse transformation according to each transformation unit in the coding unit, based on the information about the size of the transformation unit of the coding unit according to the coded depths, in order to perform the inverse transformation according to the largest coding units. [00082] The hierarchical decoder 230 can determine at least one coded depth of a current largest coding unit using the division information according to the depths. If the division information indicates that the image data is no longer divided at the current depth, the current depth is a coded depth. Consequently, the hierarchical decoder 230 can decode the coding unit of the current depth with respect to the image data of the current largest coding unit, based on the information about the partition type of the prediction unit, the prediction mode, and the dimension of the transformation unit. [00083] In other words, the data units containing the encoding information, including the same division information, can be gathered by observing the encoding information assigned to the predetermined data unit among the coding unit, the prediction unit, and the minimum unit, and the gathered data units can be considered to be one data unit to be decoded by the hierarchical decoder 230 in the same encoding mode. [00084] The video decoding apparatus 200 can obtain information about at least one coding unit that generates the minimum encoding error when encoding is recursively performed for each largest coding unit, and can use the information to decode the current frame.
In other words, the encoded image data of the coding units having the tree structure, determined to be the optimal coding units in each largest coding unit, can be decoded. [00085] In this way, even if the image data has a high resolution and a large amount of data, the image data can be efficiently decoded and reconstructed using a coding unit size and an encoding mode, which are adaptively determined according to the characteristics of the image data, using information about an optimal encoding mode received from an encoder. [00086] A method of determining coding units having a tree structure, a prediction unit, and a transformation unit, according to an embodiment of the present invention, will now be described with reference to FIGS. 3 through 13. [00087] FIG. 3 is a diagram depicting a concept of coding units according to an embodiment of the present invention. [00088] A dimension of a coding unit can be expressed as height x width, and can be 64 x 64, 32 x 32, 16 x 16, or 8 x 8. A 64 x 64 coding unit can be divided into 64 x 64, 64 x 32, 32 x 64, or 32 x 32 partitions; a 32 x 32 coding unit can be divided into 32 x 32, 32 x 16, 16 x 32, or 16 x 16 partitions; a 16 x 16 coding unit can be divided into 16 x 16, 16 x 8, 8 x 16, or 8 x 8 partitions; and an 8 x 8 coding unit can be divided into 8 x 8, 8 x 4, 4 x 8, or 4 x 4 partitions. [00089] With respect to the video data 310, a resolution of 1920 x 1080, a maximum coding unit dimension of 64, and a maximum depth of 2 are defined. With respect to the video data 320, a resolution of 1920 x 1080, a maximum coding unit dimension of 64, and a maximum depth of 3 are defined. With respect to the video data 330, a resolution of 352 x 288, a maximum coding unit dimension of 16, and a maximum depth of 1 are defined. The maximum depth shown in FIG. 3 indicates the total number of divisions from the largest coding unit to the smallest coding unit.
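The relation in [00088] and [00089] between the maximum dimension, the maximum depth, and the resulting coding unit dimensions can be sketched as follows (Python; each division halves the long axis dimension):

```python
def coding_unit_sizes(max_size, max_depth):
    # Long-axis dimensions of the deeper coding units obtained by
    # dividing the largest coding unit max_depth times; each division
    # halves the dimension.
    return [max_size >> depth for depth in range(max_depth + 1)]
```

For the video data 310 (maximum dimension 64, maximum depth 2) this yields [64, 32, 16], and for the video data 330 (maximum dimension 16, maximum depth 1) it yields [16, 8], matching the coding units 315 and 335 described below.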
[00090] If a resolution is high or the amount of data is large, the maximum size of a coding unit can be large, so as not only to increase the coding efficiency, but also to accurately reflect the characteristics of an image. Accordingly, the maximum size of the coding unit of the video data 310 and 320, which have a higher resolution than the video data 330, can be 64. [00091] Since the maximum depth of the video data 310 is 2, coding units 315 of the video data 310 can include a largest coding unit having a long axis dimension of 64, and coding units having long axis dimensions of 32 and 16, as depths are deepened by two layers by dividing the largest coding unit twice. However, since the maximum depth of the video data 330 is 1, coding units 335 of the video data 330 can include a largest coding unit having a long axis dimension of 16, and coding units having a long axis dimension of 8, as depths are deepened by one layer by dividing the largest coding unit once. [00092] Since the maximum depth of the video data 320 is 3, coding units 325 of the video data 320 can include a largest coding unit having a long axis dimension of 64, and coding units having long axis dimensions of 32, 16, and 8, as depths are deepened by 3 layers by dividing the largest coding unit three times. As a depth deepens, detailed information can be expressed precisely. [00093] FIG. 4 is a block diagram of a video encoder 400 based on coding units having a hierarchical structure, according to an embodiment of the present invention. [00094] An intra predictor 410 performs intra prediction on coding units in an intra mode, with respect to a current frame 405, and a motion estimator 420 and a motion compensator 425 respectively perform inter estimation and motion compensation on coding units in an inter mode using the current frame 405 and a reference frame 495.
[00095] Data output from the intra predictor 410, the motion estimator 420, and the motion compensator 425 is output as a quantized transformation coefficient through a transformer 430 and a quantizer 440. The quantized transformation coefficient is reconstructed as data in a spatial domain through an inverse quantizer 460 and an inverse transformer 470, and the reconstructed data in the spatial domain is output as the reference frame 495 after being post-processed through a deblocking filter 480 and a loop filter 490. The quantized transformation coefficient can be output as a bit stream 455 through an entropy encoder 450. [00096] The entropy encoder 450 arithmetically encodes syntax elements related to a transformation unit, such as a significant transformation unit coefficient indicator (cbf), indicating whether a transformation coefficient other than 0 is included in a transformation unit, a significance map indicating the location of a transformation coefficient other than 0, a first critical value indicator (coeff_abs_level_greater1_flag) indicating whether a transformation coefficient has a value greater than 1, a second critical value indicator (coeff_abs_level_greater2_flag) indicating whether a transformation coefficient has a value greater than 2, and dimension information of a transformation coefficient (coeff_abs_level_remaining) corresponding to the difference between a base level (baseLevel), which is determined based on the first critical value indicator and the second critical value indicator, and an actual transformation coefficient (absCoeff).
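The arithmetic relation described in [00096] between the two critical value indicators, the base level, and coeff_abs_level_remaining can be sketched as follows (a Python illustration of the described relation only, not the full entropy coding path):

```python
def reconstruct_abs_coeff(greater1_flag, greater2_flag, remaining):
    # The base level is implied by the indicators already decoded:
    # a significant coefficient is at least 1, the first critical value
    # indicator raises the floor past 1, and the second past 2.
    base_level = 1 + greater1_flag + greater2_flag
    # coeff_abs_level_remaining is the difference between the actual
    # absolute transformation coefficient and the base level.
    return base_level + remaining
```

For example, a coefficient with both critical value indicators set and a remaining value of 4 has an absolute value of 3 + 4 = 7.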
[00097] For the video encoder 400 to be applied to the video encoding apparatus 100, all elements of the video encoder 400, that is, the intra predictor 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking filter 480, and the loop filter 490, must perform operations based on each coding unit among the coding units having a tree structure, considering the maximum depth of each largest coding unit. [00098] Specifically, the intra predictor 410, the motion estimator 420, and the motion compensator 425 determine partitions and a prediction mode for each coding unit among the coding units having a tree structure, considering the maximum dimension and the maximum depth of the current largest coding unit, and the transformer 430 determines the size of the transformation unit in each coding unit among the coding units having a tree structure. [00099] FIG. 5 is a block diagram of a video decoder 500 based on coding units, according to an embodiment of the present invention. [000100] An analyzer 510 analyzes the encoded image data to be decoded and the encoding information required for decoding, from a bit stream 505. The encoded image data passes through an entropy decoder 520 and an inverse quantizer 530 to be output as inversely quantized data.
The entropy decoder 520 obtains, from the bit stream, the syntax elements related to a transformation unit, that is, a significant transformation unit coefficient indicator (cbf), indicating whether a transformation coefficient other than 0 is included in a transformation unit, a significance map indicating the location of a transformation coefficient other than 0, a first critical value indicator (coeff_abs_level_greater1_flag) indicating whether a transformation coefficient has a value greater than 1, a second critical value indicator (coeff_abs_level_greater2_flag) indicating whether a transformation coefficient has a value greater than 2, and dimension information of a transformation coefficient (coeff_abs_level_remaining) corresponding to the difference between a base level (baseLevel), which is determined based on the first critical value indicator and the second critical value indicator, and an actual transformation coefficient (absCoeff), and arithmetically decodes the obtained syntax elements in order to reconstruct them. [000101] An inverse transformer 540 reconstructs the inversely quantized data into image data in a spatial domain. An intra predictor 550 performs intra prediction on coding units in an intra mode with respect to the image data in the spatial domain, and a motion compensator 560 performs motion compensation on coding units in an inter mode using a reference frame 585. [000102] The image data in the spatial domain, which passed through the intra predictor 550 and the motion compensator 560, can be output as a reconstructed frame 595 after being post-processed through a deblocking filter 570 and a loop filter 580. In addition, the image data that is post-processed through the deblocking filter 570 and the loop filter 580 can be output as the reference frame 585.
[000103] For the video decoder 500 to be applied to the video decoding apparatus 200, all elements of the video decoder 500, that is, the analyzer 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra predictor 550, the motion compensator 560, the deblocking filter 570, and the loop filter 580, perform operations based on coding units having a tree structure for each largest coding unit. [000104] The intra predictor 550 and the motion compensator 560 determine a partition and a prediction mode for each coding unit having a tree structure, and the inverse transformer 540 must determine a size of a transformation unit for each coding unit. [000105] FIG. 6 is a diagram illustrating the deeper coding units according to depths, and partitions, according to an embodiment of the present invention. [000106] The video encoding apparatus 100 and the video decoding apparatus 200 use hierarchical coding units in order to consider the characteristics of an image. The maximum height, maximum width, and maximum depth of the coding units can be determined adaptively according to the characteristics of the image, or they can be defined differently by a user. The dimensions of the deeper coding units according to the depths can be determined according to the predetermined maximum dimension of the coding unit. [000107] In a hierarchical structure 600 of the coding units according to an embodiment of the present invention, the maximum height and the maximum width of the coding units are each 64, and the maximum depth is 4. Since the depth deepens along a vertical axis of the hierarchical structure 600, each of the height and width of the deeper coding unit is divided. In addition, a prediction unit and the partitions, which are the bases for the prediction encoding of each deeper coding unit, are shown along a horizontal axis of the hierarchical structure 600.
[000108] In other words, a coding unit 610 is the largest coding unit in the hierarchical structure 600, where the depth is 0 and the dimension, that is, height by width, is 64 x 64. The depth deepens along the vertical axis, and there exist a coding unit 620 with a dimension of 32 x 32 and a depth of 1, a coding unit 630 with a dimension of 16 x 16 and a depth of 2, a coding unit 640 with a dimension of 8 x 8 and a depth of 3, and a coding unit 650 with a dimension of 4 x 4 and a depth of 4. The coding unit 650 with the dimension of 4 x 4 and the depth of 4 is the smallest coding unit. [000109] The prediction unit and the partitions of a coding unit are arranged along the horizontal axis according to each depth. In other words, if the coding unit 610 with the dimension of 64 x 64 and the depth of 0 is a prediction unit, the prediction unit can be divided into partitions included in the coding unit 610, that is, a partition 610 with a dimension of 64 x 64, partitions 612 with a dimension of 64 x 32, partitions 614 with a dimension of 32 x 64, or partitions 616 with a dimension of 32 x 32. [000110] Likewise, a prediction unit of the coding unit 620 with the dimension of 32 x 32 and the depth of 1 can be divided into partitions included in the coding unit 620, that is, a partition 620 having a dimension of 32 x 32, partitions 622 having a dimension of 32 x 16, partitions 624 having a dimension of 16 x 32, and partitions 626 having a dimension of 16 x 16. [000111] Likewise, a prediction unit of the coding unit 630 with the dimension of 16 x 16 and the depth of 2 can be divided into partitions included in the coding unit 630, that is, a partition with a dimension of 16 x 16 included in the coding unit 630, partitions 632 with a dimension of 16 x 8, partitions 634 with a dimension of 8 x 16, and partitions 636 having a dimension of 8 x 8.
[000112] Likewise, a prediction unit of the coding unit 640 with the dimension of 8 x 8 and the depth of 3 can be divided into partitions included in the coding unit 640, that is, a partition with a dimension of 8 x 8 included in the coding unit 640, partitions 642 with a dimension of 8 x 4, partitions 644 with a dimension of 4 x 8, and partitions 646 having a dimension of 4 x 4. [000113] The coding unit 650 with the dimension of 4 x 4 and the depth of 4 is the smallest coding unit and a coding unit of the lowest depth. A prediction unit of the coding unit 650 is only assigned to a partition with a dimension of 4 x 4. [000114] In order to determine at least one coded depth of the coding units that constitute the largest coding unit 610, the hierarchical encoder 110 of the video encoding apparatus 100 performs encoding for the coding units corresponding to each depth included in the largest coding unit 610. [000115] The number of deeper coding units according to the depths, including data in the same range and in the same dimension, increases as the depth deepens. For example, four coding units corresponding to a depth of 2 are needed to cover the data that is included in one coding unit corresponding to a depth of 1. Consequently, in order to compare the results of encoding the same data according to the depths, the coding unit corresponding to the depth of 1 and the four coding units corresponding to the depth of 2 are each encoded. [000116] In order to perform the encoding for a current depth among the depths, a minimum encoding error can be selected for the current depth by performing the encoding for each prediction unit in the coding units corresponding to the current depth, along the horizontal axis of the hierarchical structure 600. Alternatively, the minimum encoding error can be searched for by comparing the minimum encoding errors according to the depths, performing the encoding for each depth as the depth deepens along the vertical axis of the hierarchical structure 600.
A depth and a partition having the minimum encoding error in the largest coding unit 610 can be selected as the coded depth and a partition type of the largest coding unit 610. [000117] FIG. 7 is a diagram for describing a relationship between a coding unit 710 and transformation units 720, according to an embodiment of the present invention. [000118] The video encoding apparatus 100 or the video decoding apparatus 200 encodes or decodes an image, for each largest coding unit, according to coding units having dimensions equal to or smaller than the dimension of the largest coding unit. The dimensions of the transformation units for the transformation during encoding can be selected based on data units that are not larger than a corresponding coding unit. [000119] For example, in the video encoding apparatus 100 or the video decoding apparatus 200, if the dimension of the coding unit 710 is 64 x 64, the transformation can be performed using the transformation units 720 with a dimension of 32 x 32. [000120] In addition, the data of the coding unit 710 with the dimension of 64 x 64 can be encoded by performing the transformation in each of the transformation units with the dimensions of 32 x 32, 16 x 16, 8 x 8, and 4 x 4, which are smaller than 64 x 64, and then a transformation unit having the minimum encoding error can be selected. [000121] FIG. 8 is a diagram describing the encoding information of coding units corresponding to a coded depth, according to an embodiment of the present invention. [000122] An output unit 130 of the video encoding apparatus 100 can encode and transmit information 800 about a partition type, information 810 about a prediction mode, and information 820 about the size of a transformation unit, for each coding unit corresponding to a coded depth, as information about an encoding mode.
[000123] The information 800 indicates information about the shape of a partition obtained by dividing a prediction unit of a current coding unit, where the partition is a data unit for prediction encoding the current coding unit. For example, a current coding unit CU_0 with a dimension of 2N x 2N can be divided into any one of a partition 802 with a dimension of 2N x 2N, a partition 804 with a dimension of 2N x N, a partition 806 with a dimension of N x 2N, and a partition 808 with a dimension of N x N. Here, the information 800 about a partition type is defined to indicate one of the partition 802 having a dimension of 2N x 2N, the partition 804 having a dimension of 2N x N, the partition 806 having a dimension of N x 2N, and the partition 808 having a dimension of N x N. [000124] The information 810 indicates a prediction mode for each partition. For example, the information 810 can indicate a prediction encoding mode performed on the partition indicated by the information 800, that is, an intra mode 812, an inter mode 814, or a skip mode 816. [000125] The information 820 indicates a transformation unit on which the transformation performed on a current coding unit is to be based. For example, the transformation unit can be a first intra transformation unit 822, a second intra transformation unit 824, a first inter transformation unit 826, or a second inter transformation unit 828. [000126] The image data and encoding information extraction unit 210 of the video decoding apparatus 200 can extract and use the information 800 about a partition type, the information 810 about a prediction mode, and the information 820 about the size of a transformation unit, for decoding, according to each deeper coding unit. [000127] FIG. 9 is a diagram of deeper coding units according to the depths, according to an embodiment of the present invention. [000128] Division information can be used to indicate a change in depth.
The division information indicates whether a coding unit of a current depth is divided into coding units of a lower depth. [000129] A prediction unit 910 for prediction encoding of a coding unit 900 with a depth of 0 and a dimension of 2N_0 x 2N_0 can include partitions of a partition type 912 having a dimension of 2N_0 x 2N_0, a partition type 914 having a dimension of 2N_0 x N_0, a partition type 916 having a dimension of N_0 x 2N_0, and a partition type 918 with a dimension of N_0 x N_0. FIG. 9 illustrates only the partition types 912 to 918 that are obtained by symmetrically dividing the prediction unit 910, but a partition type is not limited to them, and the partitions of the prediction unit 910 can include asymmetric partitions, partitions with a predetermined shape, and partitions with a geometric shape. [000130] Prediction encoding is performed repeatedly on one partition with a dimension of 2N_0 x 2N_0, two partitions having a dimension of 2N_0 x N_0, two partitions having a dimension of N_0 x 2N_0, and four partitions having a dimension of N_0 x N_0, according to each partition type. Prediction encoding in an intra mode and an inter mode can be performed on the partitions having the dimensions of 2N_0 x 2N_0, N_0 x 2N_0, 2N_0 x N_0, and N_0 x N_0. Prediction encoding in a skip mode is performed only on the partition with the dimension of 2N_0 x 2N_0. [000131] If an encoding error is the smallest in one of the partition types 912 to 916 having the dimensions of 2N_0 x 2N_0, 2N_0 x N_0, and N_0 x 2N_0, the prediction unit 910 may not be divided to a lower depth. [000132] If the encoding error is the smallest in the partition type 918 with the dimension of N_0 x N_0, the depth is changed from 0 to 1 to divide the partition type 918 in operation 920, and encoding is performed repeatedly on coding units of the partition type having a depth of 2 and a dimension of N_0 x N_0 to search for a minimum encoding error.
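The selection rule in [000131] and [000132] — keep the current depth if its error is smallest, otherwise deepen and encode again — reduces, once the encoding errors per depth have been measured, to picking the depth with the minimum encoding error. A minimal sketch (the error values are hypothetical inputs, e.g. rate-distortion measurements):

```python
def coded_depth(errors_by_depth):
    # errors_by_depth[d] holds the total encoding error measured when
    # the region is encoded at depth d; the coded depth is simply the
    # depth with the minimum encoding error.
    return min(range(len(errors_by_depth)),
               key=lambda depth: errors_by_depth[depth])
```

For example, measured errors of [10.0, 4.0, 7.0] for depths 0 to 2 would select a coded depth of 1.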
[000133] A prediction unit 940 for prediction encoding of the coding unit 930 with a depth of 1 and a dimension of 2N_1 x 2N_1 (= N_0 x N_0) can include partitions of a partition type 942 having a dimension of 2N_1 x 2N_1, a partition type 944 having a dimension of 2N_1 x N_1, a partition type 946 having a dimension of N_1 x 2N_1, and a partition type 948 having a dimension of N_1 x N_1. [000134] If an encoding error is the smallest in the partition type 948 with the dimension of N_1 x N_1, a depth is changed from 1 to 2 to divide the partition type 948 in operation 950, and encoding is performed repeatedly on coding units 960 having a depth of 2 and a dimension of N_2 x N_2 to search for a minimum encoding error. [000135] When a maximum depth is d, the division operation according to each depth can be performed until the depth becomes d-1, and division information can be encoded while the depth is one of 0 to d-2. In other words, when encoding is performed until the depth is d-1, after a coding unit corresponding to a depth of d-2 is divided in operation 970, a prediction unit 990 for prediction encoding of a coding unit 980 having a depth of d-1 and a dimension of 2N_(d-1) x 2N_(d-1) can include partitions of a partition type 992 having a dimension of 2N_(d-1) x 2N_(d-1), a partition type 994 having a dimension of 2N_(d-1) x N_(d-1), a partition type 996 having a dimension of N_(d-1) x 2N_(d-1), and a partition type 998 with a dimension of N_(d-1) x N_(d-1). [000136] Prediction encoding can be performed repeatedly on one partition with a dimension of 2N_(d-1) x 2N_(d-1), two partitions having a dimension of 2N_(d-1) x N_(d-1), two partitions having a dimension of N_(d-1) x 2N_(d-1), and four partitions having a dimension of N_(d-1) x N_(d-1), among the partition types 992 to 998, to search for a partition type having a minimum encoding error.
[000137] Even when the partition type 998 with the dimension of N_(d-1) x N_(d-1) has the minimum encoding error, since the maximum depth is d, the coding unit CU_(d-1) having a depth of d-1 is no longer divided to a lower depth, and the coded depth for the coding units that constitute the current largest coding unit 900 is determined to be d-1, and a partition type of the current largest coding unit 900 can be determined to be N_(d-1) x N_(d-1). In addition, since the maximum depth is d, division information for the smallest coding unit 952 is not defined. [000138] A data unit 999 can be a 'minimum unit' for the current largest coding unit. A minimum unit according to an embodiment of the present invention can be a square data unit obtained by dividing the smallest coding unit 980 by 4. By performing the encoding repeatedly, the video encoding apparatus 100 can select a depth that has the minimum encoding error by comparing the encoding errors according to the depths of the coding unit 900 to determine a coded depth, and define the corresponding partition type and prediction mode as the encoding mode of the coded depth. [000139] As such, the minimum encoding errors according to the depths are compared at all depths from 1 to d, and a depth having the minimum encoding error can be determined as a coded depth. The coded depth, the partition type of the prediction unit, and the prediction mode can be encoded and transmitted as information about the encoding mode. In addition, since a coding unit is divided from a depth of 0 to the coded depth, only the division information of the coded depth is set to 0, and the division information of the depths excluding the coded depth is set to 1. [000140] The entropy decoder 220 of the video decoding apparatus 200 can extract and use the information about the coded depth and the prediction unit of the coding unit 900 to decode the coding unit 912.
The video decoding apparatus 200 can determine the depth at which the division information is 0 as a coded depth, using the division information according to the depths, and use the information about the encoding mode of the corresponding depth for decoding. [000141] FIGS. 10 to 12 are diagrams describing a relationship between coding units 1010, prediction units 1060, and transformation units 1070, according to an embodiment of the present invention. [000142] The coding units 1010 are coding units having a tree structure, corresponding to the coded depths determined by the video encoding apparatus 100, in the largest coding unit. The prediction units 1060 are partitions of the prediction units of each of the coding units 1010, and the transformation units 1070 are transformation units of each of the coding units 1010. [000143] When the depth of the largest coding unit is 0 in the coding units 1010, the depths of the coding units 1012 and 1054 are 1, the depths of the coding units 1014, 1016, 1018, 1028, 1050, and 1052 are 2, the depths of the coding units 1020, 1022, 1024, 1026, 1030, 1032, and 1048 are 3, and the depths of the coding units 1040, 1042, 1044, and 1046 are 4. [000144] In the prediction units 1060, some coding units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 are obtained by dividing the coding units. In other words, the partition types in the coding units 1014, 1022, 1050, and 1054 have a dimension of 2N x N, the partition types in the coding units 1016, 1048, and 1052 have a dimension of N x 2N, and the partition type of the coding unit 1032 has a dimension of N x N. The prediction units and partitions of the coding units 1010 are equal to or smaller than each coding unit. [000145] The transformation or inverse transformation is performed on the image data of the coding unit 1052 in the transformation units 1070 in a data unit that is smaller than the coding unit 1052.
In addition, the coding units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 in the transformation units 1070 are different from those in the prediction units 1060 in terms of dimensions and shapes. In other words, the video encoding apparatus 100 and the video decoding apparatus 200 can perform intra prediction, motion estimation, motion compensation, transformation, and inverse transformation individually on a data unit in the same coding unit. [000146] In this way, encoding is performed recursively on each of the coding units having a hierarchical structure in each region of a largest coding unit to determine an optimal coding unit, and thus coding units having a recursive tree structure can be obtained. Encoding information can include division information about a coding unit, information about a partition type, information about a prediction mode, and information about the size of a transformation unit. [000147] Table 1 shows the encoding information that can be set by the video encoding apparatus 100 and the video decoding apparatus 200. Table 1 [000148] The entropy encoder 120 of the video encoding apparatus 100 can output the encoding information about the coding units having a tree structure, and the entropy decoder 220 of the video decoding apparatus 200 can extract the encoding information about the coding units having a tree structure from a received bit stream. [000149] The division information indicates whether a current coding unit is divided into coding units of a lower depth. If the division information of a current depth d is 0, a depth at which a current coding unit is no longer divided to a lower depth is a coded depth, and thus information about a partition type, a prediction mode, and a dimension of a transformation unit can be defined for the coded depth. If the current coding unit is further divided according to the division information, encoding is performed independently on the four divided coding units of the lower depth.
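The decoding rule in [000149] — division information 1 divides a coding unit into four units of the lower depth, division information 0 stops at the coded depth — can be sketched as a recursive walk (Python; the pre-order flag ordering and the scan order of the four sub-units are assumptions for illustration):

```python
def decode_coding_tree(split_flags, size, depth=0, x=0, y=0, out=None):
    # Consumes division information flags in pre-order (the list is
    # mutated); a flag of 1 divides the current coding unit into four
    # units of the lower depth, a flag of 0 marks a coding unit of the
    # coded depth. Returns (x, y, size, depth) tuples.
    if out is None:
        out = []
    if split_flags.pop(0) == 1:
        half = size // 2
        for dy in (0, half):          # assumed scan order of sub-units
            for dx in (0, half):
                decode_coding_tree(split_flags, half, depth + 1,
                                   x + dx, y + dy, out)
    else:
        out.append((x, y, size, depth))
    return out
```

For example, the flags [1, 0, 0, 0, 0] on a 64 x 64 largest coding unit yield four 32 x 32 coding units at depth 1, each of which then carries its own partition type, prediction mode, and transformation unit information.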
[000150] A prediction mode can be one of an intra mode, an inter mode, and a skip mode. The intra mode and the inter mode can be defined in all partition types, and the skip mode is defined only in a partition type with a dimension of 2N x 2N. [000151] The information about the partition type can indicate symmetric partition types having the dimensions of 2N x 2N, 2N x N, N x 2N, and N x N, which are obtained by symmetrically dividing a height or width of a prediction unit, and asymmetric partition types having the dimensions of 2N x nU, 2N x nD, nL x 2N, and nR x 2N, which are obtained by asymmetrically dividing the height or width of the prediction unit. The asymmetric partition types having the dimensions of 2N x nU and 2N x nD can be obtained, respectively, by dividing the height of the prediction unit in 1:n and n:1 (where n is an integer greater than 1), and the asymmetric partition types having the dimensions of nL x 2N and nR x 2N can be obtained, respectively, by dividing the width of the prediction unit in 1:n and n:1. [000152] The size of the transformation unit can be defined to be of two types in the intra mode and of two types in the inter mode. In other words, if the division information of the transformation unit is 0, the size of the transformation unit can be 2N x 2N, which is the size of the current coding unit. If the division information of the transformation unit is 1, the transformation units can be obtained by dividing the current coding unit. In addition, if a partition type of the current coding unit with the dimension of 2N x 2N is a symmetric partition type, the dimension of a transformation unit can be N x N, and if the partition type of the current coding unit is an asymmetric partition type, the size of the transformation unit can be N/2 x N/2. [000153] The encoding information about the coding units having a tree structure can include at least one of a coding unit corresponding to a coded depth, a prediction unit, and a minimum unit.
The coding unit corresponding to the coded depth can include at least one of a prediction unit and a minimum unit containing the same encoding information. [000154] Thus, it is determined whether adjacent data units are included in the same coding unit corresponding to the coded depth by comparing the encoding information of the adjacent data units. In addition, a coding unit corresponding to a coded depth is determined using the encoding information of a data unit, and thus a distribution of the coded depths in a largest coding unit can be determined. [000155] Thus, if a current coding unit is predicted based on the encoding information of adjacent data units, the encoding information of the data units in the deeper coding units adjacent to the current coding unit can be directly referred to and used. [000156] Alternatively, if a current coding unit is predicted based on the encoding information of adjacent data units, the data units adjacent to the current coding unit are searched for using the encoding information of the data units, and the searched adjacent coding units can be referred to for predicting the current coding unit. [000157] FIG. 13 is a diagram for describing a relationship between a coding unit, a prediction unit, and a transformation unit, according to the encoding mode information in Table 1. [000158] A largest coding unit 1300 includes coding units 1302, 1304, 1306, 1312, 1314, 1316, and 1318 of coded depths. Here, since the coding unit 1318 is a coding unit of a coded depth, the division information can be set to 0.
Information about a partition type of the coding unit 1318 having a dimension of 2N x 2N can be defined to be one of a partition type 1322 having a dimension of 2N x 2N, a partition type 1324 having a dimension of 2N x N, a partition type 1326 having a dimension of N x 2N, a partition type 1328 having a dimension of N x N, a partition type 1332 having a dimension of 2N x nU, a partition type 1334 having a dimension of 2N x nD, a partition type 1336 having a dimension of nL x 2N, and a partition type 1338 having a dimension of nR x 2N. [000159] When the partition type is defined to be symmetric, that is, the partition type 1322, 1324, 1326, or 1328, a transformation unit 1342 having a dimension of 2N x 2N is defined if the split information (TU dimension indicator) of a transformation unit is 0, and a transformation unit 1344 having a dimension of N x N is defined if a TU dimension indicator is 1. [000160] When the partition type is defined to be asymmetric, that is, the partition type 1332, 1334, 1336, or 1338, a transformation unit 1352 having a dimension of 2N x 2N is defined if a TU dimension indicator is 0, and a transformation unit 1354 having a dimension of N/2 x N/2 is defined if a TU dimension indicator is 1. [000161] The TU dimension indicator is a type of transformation index; a dimension of a transformation unit corresponding to a transformation index can be modified according to a prediction unit type or a partition type of a coding unit. [000162] When the partition type is defined to be symmetric, that is, the partition type 1322, 1324, 1326, or 1328, the transformation unit 1342 having a dimension of 2N x 2N is defined if a TU dimension indicator of a transformation unit is 0, and the transformation unit 1344 having a dimension of N x N is defined if a TU dimension indicator is 1.
[000163] When the partition type is defined to be asymmetric, that is, the partition type 1332 (2NxnU), 1334 (2NxnD), 1336 (nLx2N), or 1338 (nRx2N), the transformation unit 1352 having a dimension of 2N x 2N is defined if a TU dimension indicator is 0, and the transformation unit 1354 having a dimension of N/2 x N/2 is defined if a TU dimension indicator is 1. [000164] With reference to FIG. 9, the TU dimension indicator described above is an indicator having a value of 0 or 1, but the TU dimension indicator is not limited to 1 bit, and a transformation unit can be divided hierarchically as the TU dimension indicator increases from 0. The transformation unit division information (TU dimension indicator) can be used as an example of a transformation index. [000165] In this case, when a TU dimension indicator, according to a modality, is used together with a maximum dimension and a minimum dimension of a transformation unit, the dimension of the transformation unit actually used can be expressed. The video encoding apparatus 100 can encode the dimension information of the larger transformation unit, the dimension information of the smaller transformation unit, and the division information of the larger transformation unit. The encoded dimension information of the larger transformation unit, the dimension information of the smaller transformation unit, and the division information of the larger transformation unit can be inserted into a sequence parameter set (SPS). The video decoding apparatus 200 can use the dimension information of the larger transformation unit, the dimension information of the smaller transformation unit, and the division information of the larger transformation unit for video decoding.
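The hierarchical division described in paragraph [000164], in which a transformation unit is halved in each direction every time the TU dimension indicator increases from 0, can be sketched as follows (a minimal illustration; the function name is mine, and square power-of-two dimensions are assumed):

```python
def tu_size_from_flag(root_tu_size, tu_size_flag):
    # Each increment of the TU dimension indicator halves the transformation
    # unit in both the horizontal and vertical directions.
    return root_tu_size >> tu_size_flag

print(tu_size_from_flag(32, 0))  # -> 32
print(tu_size_from_flag(32, 2))  # -> 8
```

The decoder would additionally clip the result against the smaller transformation unit dimension signaled in the SPS, as described above.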
[000166] For example, (a) if a dimension of a current coding unit is 64x64 and a dimension of a larger transformation unit is 32x32, (a-1) a dimension of a transformation unit can be 32x32 if a TU dimension indicator is 0; (a-2) a dimension of a transformation unit can be 16x16 if a TU dimension indicator is 1; and (a-3) a dimension of a transformation unit can be 8x8 if a TU dimension indicator is 2. [000167] Alternatively, (b) if a dimension of a current coding unit is 32x32 and a dimension of a smaller transformation unit is 32x32, (b-1) a dimension of a transformation unit can be 32x32 if a TU dimension indicator is 0, and, since the dimension of a transformation unit cannot be less than 32x32, no other TU dimension indicator can be defined. [000168] Alternatively, (c) if a dimension of a current coding unit is 64x64 and a maximum TU dimension indicator is 1, a TU dimension indicator can be 0 or 1 and no other TU dimension indicator can be defined. [000169] Consequently, when defining a maximum TU dimension indicator as 'MaxTransformSizeIndex', a dimension of a minimum transformation unit as 'MinTransformSize', and a dimension of a transformation unit in the case where a TU dimension indicator is 0, that is, a root transformation unit, as 'RootTuSize', a dimension of a smaller transformation unit 'CurrMinTuSize', which is available in a current coding unit, can be defined by Equation (1) below: CurrMinTuSize = max(MinTransformSize, RootTuSize/(2^MaxTransformSizeIndex)) (1) [000170] Compared with the dimension of the smaller transformation unit 'CurrMinTuSize' that is available in the current coding unit, the dimension of the root transformation unit 'RootTuSize', which is the dimension of a transformation unit when a TU dimension indicator is 0, can indicate a larger transformation unit that can be selected with respect to a system.
That is, according to Equation (1), 'RootTuSize/(2^MaxTransformSizeIndex)' is a dimension of a transformation unit that is obtained by dividing 'RootTuSize', which is a dimension of a transformation unit when the transformation unit division information is 0, by the number of divisions corresponding to the division information of the larger transformation unit, and 'MinTransformSize' is a dimension of a smaller transformation unit, and thus the larger value of these can be 'CurrMinTuSize', which is the dimension of the smaller transformation unit that is available in the current coding unit. [000171] The dimension of the root transformation unit 'RootTuSize' according to one embodiment of the present invention can vary according to a prediction mode. [000172] For example, if a current prediction mode is an inter mode, 'RootTuSize' can be determined according to Equation (2) below. In Equation (2), 'MaxTransformSize' refers to a dimension of the larger transformation unit, and 'PUSize' refers to a dimension of the current prediction unit: RootTuSize = min(MaxTransformSize, PUSize) (2) [000173] In other words, if a current prediction mode is an inter mode, the dimension of the root transformation unit 'RootTuSize', which is the dimension of a transformation unit when a TU dimension indicator is 0, can be defined as the smaller value between the dimension of the larger transformation unit and the dimension of the current prediction unit. [000174] If a prediction mode of a current partition unit is an intra mode, 'RootTuSize' can be determined according to Equation (3) below, where 'PartitionSize' refers to a dimension of the current partition unit.
RootTuSize = min(MaxTransformSize, PartitionSize) (3) [000175] In other words, if a current prediction mode is an intra mode, the dimension of the root transformation unit 'RootTuSize' can be defined as the smaller value between the dimension of the larger transformation unit and the dimension of the current partition unit. [000176] However, it should be noted that the dimension of the root transformation unit 'RootTuSize', which is the dimension of the current largest transformation unit according to one embodiment of the present invention and varies according to a prediction mode of a partition unit, is only an example, and factors for determining the dimension of the current largest transformation unit are not limited to these. [000177] An entropy encoding operation of a syntax element, which is performed by the entropy encoder 120 of the video encoding apparatus 100 of FIG. 1, and an entropy decoding operation of a syntax element, which is performed by the entropy decoder 220 of the video decoding apparatus 200 of FIG. 2, will now be described in detail. [000178] As described above, the video encoding apparatus 100 and the video decoding apparatus 200 perform encoding and decoding by dividing a larger coding unit into coding units that are equal to or smaller than the larger coding unit. A prediction unit and a transformation unit used in the prediction and transformation can be determined based on costs, independently of other data units. Since an optimal coding unit can be determined by recursively encoding each coding unit having a hierarchical structure included in the larger coding unit, data units having a tree structure can be configured. In other words, for each larger coding unit, a coding unit having a tree structure, and a prediction unit and a transformation unit each having a tree structure, can be configured.
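The size derivations of Equations (1) to (3) above can be sketched as follows (an illustrative sketch; the function names and example values are mine, and dimensions are assumed to be powers of two so that division by 2^MaxTransformSizeIndex reduces to a right shift):

```python
def root_tu_size(max_transform_size, pu_or_partition_size):
    # Equations (2) and (3): RootTuSize is the smaller of the dimension of the
    # larger transformation unit and the dimension of the current prediction
    # unit (inter mode) or partition unit (intra mode).
    return min(max_transform_size, pu_or_partition_size)

def curr_min_tu_size(root_size, min_transform_size, max_transform_size_index):
    # Equation (1):
    #   CurrMinTuSize = max(MinTransformSize,
    #                       RootTuSize / (2 ^ MaxTransformSizeIndex))
    return max(min_transform_size, root_size >> max_transform_size_index)

# Hypothetical values: a 64x64 inter prediction unit, a 32x32 larger
# transformation unit, a 4x4 smaller transformation unit, and a maximum
# TU dimension indicator of 1.
root = root_tu_size(32, 64)
print(root)                          # -> 32
print(curr_min_tu_size(root, 4, 1))  # -> 16
```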
For decoding, hierarchical information, which indicates the structure of data units having a hierarchical structure, and non-hierarchical information for decoding, in addition to the hierarchical information, need to be transmitted. [000179] Information related to a hierarchical structure is information necessary to determine a coding unit having a tree structure, a prediction unit having a tree structure, and a transformation unit having a tree structure, as described above with reference to FIGS. 10 to 12, and includes dimension information of a larger coding unit, a coded depth, partition information of a prediction unit, a split indicator that indicates whether a coding unit is divided, dimension information of a transformation unit, and a split transformation indicator (split_transform_flag) indicating whether a transformation unit is divided into smaller transformation units for a transformation operation. Examples of coding information other than the hierarchical information include prediction mode information of inter/intra prediction applied to each prediction unit, motion vector information, prediction direction information, color component information applied to each data unit in the case where a plurality of color components are used, and transformation coefficient level information. Hereinafter, the hierarchical information and the non-hierarchical information may be referred to as syntax elements to be entropy encoded or entropy decoded. [000180] In particular, according to embodiments of the present invention, a method of selecting a context model when entropy encoding and decoding a syntax element related to a transformation unit, among the syntax elements, is provided. An entropy encoding and decoding operation of the syntax elements related to a transformation unit will now be described in detail. [000181] FIG.
14 is a block diagram of an entropy encoding apparatus 1400 according to an embodiment of the present invention. The entropy encoding apparatus 1400 corresponds to the entropy encoder 120 of the video encoding apparatus 100 of FIG. 1. [000182] With reference to FIG. 14, the entropy encoding apparatus 1400 includes a binarizer 1410, a context modeler 1420, and a binary arithmetic encoder 1430. In addition, the binary arithmetic encoder 1430 includes a regular encoding mechanism 1432 and a bypass encoding mechanism 1434. [000183] When the syntax elements input to the entropy encoding apparatus 1400 are not binary values, the binarizer 1410 binarizes the syntax elements in order to produce a string of bins consisting of the binary values 0 and 1. A bin denotes each bit of a string consisting of 0s and 1s, and is encoded by context-adaptive binary arithmetic coding (CABAC). If a syntax element is a bin having equal probabilities of 0 and 1, the syntax element is output to the bypass encoding mechanism 1434, which does not use a probability, to be encoded. [000184] The binarizer 1410 can use various binarization methods according to the type of a syntax element. Examples of binarization methods may include a unary method, a truncated unary method, a truncated Rice code method, a Golomb code method, and a fixed length code method. [000185] A significant transformation unit coefficient indicator cbf, which indicates whether a non-zero transformation coefficient (hereinafter also referred to as a "significant coefficient") exists in a transformation unit, is binarized using the fixed length code method. That is, if a non-zero transformation coefficient exists in the transformation unit, the significant transformation unit coefficient indicator cbf is defined as having a value of 1. Otherwise, if a non-zero transformation coefficient does not exist in the transformation unit, the significant transformation unit coefficient indicator cbf is defined as having a value of 0.
If an image includes a plurality of color components, the significant transformation unit coefficient indicator cbf can be defined with respect to a transformation unit of each color component. For example, if an image includes luminance (Y) and chrominance (Cb, Cr) components, a significant transformation unit coefficient indicator cbf_luma for a luminance component transformation unit, and significant transformation unit coefficient indicators cbf_cb or cbf_cr for the chrominance component transformation units, can be defined. [000186] The context modeler 1420 provides a context model for encoding a bit string corresponding to a syntax element to the regular encoding mechanism 1432. In more detail, the context modeler 1420 outputs a probability of a binary value for encoding each binary value of a bit string of a current syntax element to the binary arithmetic encoder 1430. [000187] A context model is a probability model of a bin, and includes information about which of 0 and 1 corresponds to a more probable symbol (MPS) and a less probable symbol (LPS), and probability information of at least one of the MPS and the LPS. [000188] The context modeler 1420 can select a context model for entropy encoding the significant transformation unit coefficient indicator cbf, based on a transformation depth of the transformation unit. If the dimension of the transformation unit is equal to the dimension of a coding unit, that is, if the transformation depth of the transformation unit is 0, the context modeler 1420 can determine a first predefined context model as the context model for entropy encoding the significant transformation unit coefficient indicator cbf.
Otherwise, if the dimension of the transformation unit is smaller than the dimension of the coding unit, that is, if the transformation depth of the transformation unit is not 0, the context modeler 1420 can determine a second predefined context model as the context model for entropy encoding the significant transformation unit coefficient indicator cbf. Here, the first and second context models are based on different probability distribution models. That is, the first and second context models are different context models. [000189] As described above, when the significant transformation unit coefficient indicator cbf is entropy encoded, the context modeler 1420 uses different context models in a case where the dimension of the transformation unit is equal to the dimension of the coding unit and in a case where the dimension of the transformation unit is not equal to the dimension of the coding unit. If an index indicating one of a plurality of predefined context models for entropy encoding the significant transformation unit coefficient indicator cbf is referred to as a context index ctxIdx, the context index ctxIdx can have a value obtained by adding a context increase parameter ctxInc for determining a context model and a predefined context index offset ctxIdxOffset; that is, ctxIdx = ctxInc + ctxIdxOffset. The context modeler 1420 can distinguish a case in which the transformation depth of the transformation unit is 0 from a case in which the transformation depth of the transformation unit is not 0, can change, based on the transformation depth of the transformation unit, the context increase parameter ctxInc for determining a context model, and thus can change the context index ctxIdx for determining a context model for entropy encoding the significant transformation unit coefficient indicator cbf.
[000190] In more detail, if the transformation depth is referred to as trafodepth, the context modeler 1420 can determine the context increase parameter ctxInc based on the following algorithm: ctxInc = (trafodepth == 0) ? 1 : 0 [000191] This algorithm can be implemented by the following pseudocode: { if (trafodepth == 0) ctxInc = 1; else ctxInc = 0; } [000192] The significant transformation unit coefficient indicator cbf can be defined separately according to the luminance and chrominance components. As described above, a context model for entropy encoding the significant transformation unit coefficient indicator cbf_luma of the luminance component transformation unit can be determined using the context increase parameter ctxInc, which changes depending on whether the transformation depth of the transformation unit is 0. A context model for entropy encoding the significant transformation unit coefficient indicator cbf_cb or cbf_cr of the chrominance component transformation unit can be determined using the trafodepth transformation depth value itself as the context increase parameter ctxInc. [000193] The regular encoding mechanism 1432 performs binary arithmetic encoding on a bin string corresponding to a syntax element, based on the information about the MPS and the LPS and the probability information of at least one of the MPS and the LPS, which are included in the context model provided by the context modeler 1420. [000194] FIG. 15 is a flow chart of an entropy encoding and decoding operation of a syntax element related to a transformation unit, according to an embodiment of the present invention. [000195] With reference to FIG. 15, in operation 1510, a significant transformation unit coefficient indicator cbf, which indicates whether a non-zero transformation coefficient exists among the transformation coefficients included in a current transformation unit, is first entropy encoded or decoded.
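The luma/chroma context selection described in paragraphs [000190] to [000192] can be sketched as follows (a minimal illustration; the function names and the placement of the ctx_idx_offset parameter are mine, not from the patent):

```python
def cbf_ctx_inc(trafodepth, is_luma):
    # Luma cbf: one context when the transformation unit equals the coding
    # unit (transformation depth 0), another context otherwise.
    if is_luma:
        return 1 if trafodepth == 0 else 0
    # Chroma cbf (cbf_cb / cbf_cr): the transformation depth itself is used
    # as the context increase parameter.
    return trafodepth

def cbf_ctx_idx(trafodepth, is_luma, ctx_idx_offset):
    # ctxIdx = ctxInc + ctxIdxOffset
    return cbf_ctx_inc(trafodepth, is_luma) + ctx_idx_offset

print(cbf_ctx_inc(0, True))   # -> 1
print(cbf_ctx_inc(2, True))   # -> 0
print(cbf_ctx_inc(2, False))  # -> 2
```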
As described above, a context model for entropy encoding the significant transformation unit coefficient indicator cbf can be determined based on a transformation depth of the transformation unit, and binary arithmetic encoding of the significant transformation unit coefficient indicator cbf can be performed based on the determined context model. [000196] If the significant transformation unit coefficient indicator cbf is 0, since only transformation coefficients of 0 exist in the current transformation unit, only a value of 0 is entropy encoded or decoded as the significant transformation unit coefficient indicator cbf, and the transformation coefficient level information is not entropy encoded or decoded. [000197] In operation 1520, if a significant coefficient exists in the current transformation unit, a significance map SigMap indicating a location of a significant coefficient is entropy encoded or decoded. [000198] A significance map SigMap can be formed from a significant bit and predetermined information that indicates the location of a last significant coefficient. A significant bit indicates whether a transformation coefficient according to each digitization index is a significant coefficient or 0, and can be expressed by significant_coeff_flag[i]. As will be described below, a significance map is defined in units of subsets having a predetermined dimension that are obtained by dividing the transformation unit. Consequently, significant_coeff_flag[i] indicates whether a transformation coefficient of an i-th digitization index, among the transformation coefficients included in a subset included in the transformation unit, is 0. [000199] According to conventional H.264, an indicator (End-Of-Block) that indicates whether each significant coefficient is the last significant coefficient is separately entropy encoded or decoded.
However, according to an embodiment of the present invention, the location information of the last significant coefficient itself is entropy encoded or decoded. For example, if a location of the last significant coefficient is (x, y), where x and y are integers, last_significant_coeff_x and last_significant_coeff_y, which are syntax elements indicating the coordinate values of (x, y), can be entropy encoded or decoded. [000200] In operation 1530, transformation coefficient level information indicating a dimension of a transformation coefficient is entropy encoded or decoded. According to conventional H.264/AVC, the transformation coefficient level information is expressed by coeff_abs_level_minus1, which is a syntax element. According to embodiments of the present invention, as the transformation coefficient level information, coeff_abs_level_greater1_flag, which is a syntax element indicating whether an absolute value of a transformation coefficient is greater than 1, coeff_abs_level_greater2_flag, which is a syntax element indicating whether an absolute value of a transformation coefficient is greater than 2, and coeff_abs_level_remaining, which indicates the dimension information of the remaining transformation coefficient, are encoded. [000201] The syntax element coeff_abs_level_remaining, indicating the dimension information of the remaining transformation coefficient, corresponds to a difference between a dimension of a transformation coefficient (absCoeff) and a base level value baseLevel, which is determined using coeff_abs_level_greater1_flag and coeff_abs_level_greater2_flag. The base level value baseLevel is determined according to the equation: baseLevel = 1 + coeff_abs_level_greater1_flag + coeff_abs_level_greater2_flag, and coeff_abs_level_remaining is determined according to the equation: coeff_abs_level_remaining = absCoeff - baseLevel.
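The baseLevel relationship described above can be sketched for a single non-zero coefficient as follows (an illustrative derivation; the function name is mine, and the per-subset constraints on how many flags are actually signaled are not modeled):

```python
def coeff_level_syntax(abs_coeff):
    # Derive the two threshold flags and the remaining level for one
    # non-zero transformation coefficient of dimension absCoeff.
    greater1 = 1 if abs_coeff > 1 else 0
    greater2 = 1 if abs_coeff > 2 else 0
    base_level = 1 + greater1 + greater2   # baseLevel ranges from 1 to 3
    remaining = abs_coeff - base_level     # coeff_abs_level_remaining
    return greater1, greater2, remaining

print(coeff_level_syntax(5))  # -> (1, 1, 2)
print(coeff_level_syntax(2))  # -> (1, 0, 0)
print(coeff_level_syntax(1))  # -> (0, 0, 0)
```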
While coeff_abs_level_greater1_flag and coeff_abs_level_greater2_flag have a value of 0 or 1, the base level value baseLevel can have a value from 1 to 3. Thus, coeff_abs_level_remaining can vary from (absCoeff-1) to (absCoeff-3). As described above, (absCoeff-baseLevel), which is a difference between the dimension of an original transformation coefficient absCoeff and the base level value baseLevel, is transmitted as dimension information of a transformation coefficient in order to reduce the size of the transmitted data. [000202] An operation of determining a context model for entropy encoding a significant transformation unit coefficient indicator, according to an embodiment of the present invention, will now be described. [000203] FIG. 16 is a diagram illustrating a coding unit and transformation units 1611 to 1617 included in the coding unit, according to an embodiment of the present invention. In FIG. 16, a data unit indicated by a dashed line denotes the coding unit, and the data units indicated by solid lines denote the transformation units 1611 to 1617. [000204] As described above, the video encoding apparatus 100 and the video decoding apparatus 200 perform encoding and decoding by dividing a larger coding unit into coding units having a dimension equal to or less than the dimension of the larger coding unit. A prediction unit and a transformation unit used in a prediction operation and a transformation operation can be determined based on costs, independently of other data units. If the dimension of a coding unit is larger than the dimension of a larger transformation unit usable by the video encoding apparatus 100 and the video decoding apparatus 200, the coding unit can be divided into transformation units having a dimension equal to or less than the dimension of the larger transformation unit, and a transformation operation can be performed based on the divided transformation units.
For example, if the dimension of a coding unit is 64x64 and the dimension of a larger usable transformation unit is 32x32, in order to transform (or inversely transform) the coding unit, the coding unit is divided into transformation units having a dimension equal to or less than 32x32. [000205] A transformation depth (trafodepth), indicating the number of times that the coding unit is divided in the horizontal and vertical directions into the transformation units, can be determined. For example, if the dimension of a current coding unit is 2Nx2N and the dimension of the transformation unit is 2Nx2N, the transformation depth can be determined as 0. If the dimension of the transformation unit is NxN, the transformation depth can be determined as 1. Otherwise, if the dimension of the transformation unit is N/2 x N/2, the transformation depth can be determined as 2. [000206] With reference to FIG. 16, transformation units 1611, 1616, and 1617 are level 1 transformation units obtained by dividing a root coding unit once, and have a transformation depth of 1. Transformation units 1612, 1613, 1614, and 1615 are level 2 transformation units obtained by dividing a level 1 transformation unit into four pieces, and have a transformation depth of 2. [000207] FIG. 17 is a diagram illustrating a context increase parameter ctxInc used to determine a context model of a significant transformation unit coefficient indicator cbf for each of the transformation units 1611 to 1617 of FIG. 16, based on a transformation depth. In the tree structure of FIG. 17, leaf nodes 1711 to 1717, respectively, correspond to transformation units 1611 to 1617 of FIG. 16, and the values of 0 and 1 marked on leaf nodes 1711 to 1717 indicate the significant transformation unit coefficient indicators cbf of transformation units 1611 to 1617. In addition, in FIG.
17, the leaf nodes having the same transformation depth are illustrated in the order of the transformation units located at the upper left, upper right, lower left, and lower right. For example, leaf nodes 1712, 1713, 1714, and 1715 of FIG. 17 correspond, respectively, to transformation units 1612, 1613, 1614, and 1615 of FIG. 16. In addition, with reference to FIGS. 16 and 17, it is assumed that only the significant transformation unit coefficient indicators cbf of transformation units 1612 and 1614 are 1, and that the significant transformation unit coefficient indicators cbf of the other transformation units are 0. [000208] With reference to FIG. 17, since all transformation units 1611 to 1617 of FIG. 16 are obtained by dividing a root coding unit and therefore have non-zero transformation depths, the context increase parameter ctxInc used to determine a context model of the significant transformation unit coefficient indicator cbf for each of the transformation units 1611 to 1617 is set to have a value of 0. [000209] FIG. 18 is a diagram illustrating a coding unit 1811 and a transformation unit 1812 included in the coding unit 1811, according to another embodiment of the present invention. In FIG. 18, a data unit indicated by a dashed line indicates the coding unit 1811, and a data unit indicated by a solid line indicates the transformation unit 1812. [000210] With reference to FIG. 18, if the dimension of the coding unit 1811 is equal to the dimension of the transformation unit 1812 used to transform the coding unit 1811, a transformation depth (trafodepth) of the transformation unit 1812 has a value of 0. If the transformation unit 1812 has a transformation depth of 0, a context increase parameter ctxInc used to determine a context model of a significant transformation unit coefficient indicator cbf of transformation unit 1812 is set to have a value of 1. [000211] The context modeler 1420 of FIG.
14 can compare the dimension of a coding unit with the dimension of a transformation unit based on a transformation depth of the transformation unit, can distinguish a case where the transformation depth of the transformation unit is 0 from a case where the transformation depth of the transformation unit is not 0, and thus can change the context increase parameter ctxInc used to determine a context model for entropy encoding the significant transformation unit coefficient indicator cbf. By changing the context increase parameter ctxInc used to determine a context model, the context model for entropy encoding the significant transformation unit coefficient indicator cbf can be changed between a case where the transformation depth of the transformation unit is 0 and a case where the transformation depth of the transformation unit is not 0. [000212] FIG. 19 is a diagram illustrating split transformation indicators split_transform_flag used to determine the structure of the transformation units included in the coding unit of FIG. 16, according to an embodiment of the present invention. [000213] The video encoding apparatus 100 can signal information about the structure of the transformation units used to transform each coding unit to the video decoding apparatus 200. The information about the structure of the transformation units can be signaled by means of the split transformation indicator split_transform_flag, which indicates whether each coding unit is divided in the horizontal and vertical directions into four transformation units. [000214] Referring to FIGS. 16 and 19, since a root coding unit is divided into four pieces, a split transformation indicator 1910 of the root coding unit is set to 1. If the dimension of the root coding unit is greater than the dimension of a larger usable transformation unit, the split transformation indicator 1910 of the root coding unit can always be set to 1 and need not be signaled.
This is because, if the dimension of a coding unit is larger than the dimension of a larger usable transformation unit, the coding unit must inevitably be divided into transformation units having a dimension equal to or less than the dimension of the larger transformation unit. [000215] With respect to each of the four transformation units having a transformation depth of 1, obtained by dividing the root coding unit, a split transformation indicator indicating whether each of the four transformation units is divided into four transformation units having a transformation depth of 2 is defined. In FIG. 19, the split transformation indicators of the transformation units having the same transformation depth are illustrated in the order of the transformation units located at the upper left, upper right, lower left, and lower right. A reference number 1911 denotes a split transformation indicator for transformation unit 1611 of FIG. 16. Since the transformation unit 1611 is not further divided into smaller transformation units, the split transformation indicator 1911 of the transformation unit 1611 has a value of 0. Similarly, since the transformation units 1616 and 1617 of FIG. 16 are not further divided into smaller transformation units, the split transformation indicators 1913 and 1914 of transformation units 1616 and 1617 have a value of 0. Since the upper right transformation unit having a transformation depth of 1 in FIG. 16 is divided into transformation units 1612, 1613, 1614, and 1615 having a transformation depth of 2, a split transformation indicator 1912 of the upper right transformation unit has a value of 1.
Since the transformation units 1612, 1613, 1614, and 1615 having a transformation depth of 2 are not further divided into smaller transformation units, the split transformation indicators 1915, 1916, 1917, and 1918 of transformation units 1612, 1613, 1614, and 1615 having a transformation depth of 2 have a value of 0. [000216] As described above, a context model for entropy encoding a significant transformation unit coefficient indicator cbf can be determined based on a transformation depth of a transformation unit, and binary arithmetic encoding can be performed on the significant transformation unit coefficient indicator based on the selected context model. If the significant transformation unit coefficient indicator cbf is 0, since only transformation coefficients of 0 exist in a current transformation unit, only a value of 0 is entropy encoded or decoded as the significant transformation unit coefficient indicator cbf, and the transformation coefficient level information is not entropy encoded or decoded. [000217] An entropy encoding operation of a syntax element related to the transformation coefficients included in a transformation unit of which a significant transformation unit coefficient indicator cbf has a value of 1, that is, a transformation unit having a non-zero transformation coefficient, will now be described. [000218] FIG. 20 illustrates a transformation unit 2000 that is entropy encoded according to an embodiment of the present invention. Although the transformation unit 2000 having a dimension of 16x16 is illustrated in FIG. 20, the dimension of the transformation unit 2000 is not limited to the illustrated dimension of 16x16, but can also be of various dimensions ranging from 4x4 to 32x32. [000219] With reference to FIG. 20, for entropy encoding and decoding of the transformation coefficients included in transformation unit 2000, the transformation unit 2000 can be divided into smaller transformation units.
An entropy coding operation on syntax elements related to the 4x4 transformation unit 2010 included in the transformation unit 2000 will now be described. This entropy coding operation can also be applied to transformation units of other sizes. [000220] The transformation coefficients included in the 4x4 transformation unit 2010 each have an absolute value (absCoeff) as illustrated in FIG. 20. The transformation coefficients included in the 4x4 transformation unit 2010 may be serialized according to a predetermined scanning order, as illustrated in FIG. 20, and processed sequentially. However, the scanning order is not limited to the one illustrated and may be modified. [000221] Examples of syntax elements related to the transformation coefficients included in the 4x4 transformation unit 2010 are significant_coeff_flag, a syntax element indicating whether each transformation coefficient included in the transformation unit is a significant coefficient, that is, has a non-zero value; coeff_abs_level_greater1_flag, a syntax element indicating whether the absolute value of a transformation coefficient is greater than 1; coeff_abs_level_greater2_flag, a syntax element indicating whether the absolute value is greater than 2; and coeff_abs_level_remaining, a syntax element indicating level information of the remaining transformation coefficients. [000222] FIG. 21 illustrates a significance map SigMap 2100 corresponding to the transformation unit 2010 of FIG. 20. [000223] Referring to FIGS. 20 and 21, the significance map SigMap 2100 is defined to have a value of 1 for each significant coefficient, that is, each coefficient having a non-zero value, among the transformation coefficients included in the 4x4 transformation unit 2010 of FIG. 20. The significance map SigMap 2100 is entropy encoded or decoded using a previously defined context model. [000224] FIG.
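A minimal sketch of how significant_coeff_flag (the SigMap of FIG. 21) follows from the absolute coefficient values; the helper name is hypothetical:

```python
def significance_map(abs_coeffs):
    """significant_coeff_flag: 1 for each transformation coefficient whose
    absolute value (absCoeff) is non-zero, 0 otherwise."""
    return [1 if c != 0 else 0 for c in abs_coeffs]

print(significance_map([0, 3, 0, 1]))  # [0, 1, 0, 1]
```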
22 illustrates coeff_abs_level_greater1_flag 2200 corresponding to the 4x4 transformation unit 2010 of FIG. 20. [000225] Referring to FIGS. 20 to 22, coeff_abs_level_greater1_flag 2200, an indicator of whether the corresponding significant coefficient has an absolute value greater than 1, is defined for the significant coefficients for which the significance map SigMap 2100 has a value of 1. When coeff_abs_level_greater1_flag 2200 is 1, the corresponding transformation coefficient has an absolute value greater than 1; when coeff_abs_level_greater1_flag 2200 is 0, the corresponding transformation coefficient has an absolute value of 1. In FIG. 22, coeff_abs_level_greater1_flag 2210, which is located at the position of a transformation coefficient having an absolute value of 1, has a value of 0. [000226] FIG. 23 illustrates coeff_abs_level_greater2_flag 2300 corresponding to the 4x4 transformation unit 2010 of FIG. 20. [000227] Referring to FIGS. 20 to 23, coeff_abs_level_greater2_flag 2300, indicating whether the corresponding transformation coefficient has an absolute value greater than 2, is defined for the transformation coefficients for which coeff_abs_level_greater1_flag 2200 is set to 1. When coeff_abs_level_greater2_flag 2300 is 1, the corresponding transformation coefficient has an absolute value greater than 2; when coeff_abs_level_greater2_flag 2300 is 0, the corresponding transformation coefficient has an absolute value of 2. In FIG. 23, coeff_abs_level_greater2_flag 2310, which is located at the position of a transformation coefficient having an absolute value of 2, has a value of 0. [000228] FIG. 24 illustrates coeff_abs_level_remaining 2400 corresponding to the 4x4 transformation unit 2010 of FIG. 20.
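The relations between the flags of FIGS. 22 and 23 can be illustrated as follows. This sketch applies the rules exactly as stated in the text and ignores the per-subblock limits a real codec may place on how many flags are actually coded:

```python
def greater_flags(abs_coeffs):
    """Derive coeff_abs_level_greater1_flag and coeff_abs_level_greater2_flag:
    greater1 is defined only for significant coefficients, and greater2 only
    where greater1 == 1. Results are keyed by scan position."""
    g1, g2 = {}, {}
    for i, c in enumerate(abs_coeffs):
        if c == 0:
            continue                      # not significant: no flags defined
        g1[i] = 1 if c > 1 else 0
        if g1[i] == 1:
            g2[i] = 1 if c > 2 else 0
    return g1, g2

g1, g2 = greater_flags([0, 1, 2, 5])
print(g1, g2)  # {1: 0, 2: 1, 3: 1} {2: 0, 3: 1}
```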
[000229] Referring to FIGS. 20 to 24, coeff_abs_level_remaining 2400, a syntax element indicating level information of the remaining transformation coefficients, can be obtained by calculating (absCoeff - baseLevel) for each transformation coefficient. [000230] The syntax element coeff_abs_level_remaining 2400 is the difference between the absolute value of the transformation coefficient (absCoeff) and a base level value baseLevel determined using coeff_abs_level_greater1_flag and coeff_abs_level_greater2_flag. The base level value is determined according to the equation baseLevel = 1 + coeff_abs_level_greater1_flag + coeff_abs_level_greater2_flag, and coeff_abs_level_remaining is determined according to the equation coeff_abs_level_remaining = absCoeff - baseLevel. [000231] The coeff_abs_level_remaining 2400 values can be read and entropy encoded according to the illustrated scanning order. [000232] FIG. 25 is a flowchart of a video entropy encoding method according to an embodiment of the present invention. [000233] Referring to FIGS. 14 and 25, in operation 2510, the context modeler 1420 obtains data of a coding unit transformed based on a transformation unit. In operation 2520, the context modeler 1420 determines a context model for arithmetically encoding a transformation unit significant coefficient indicator, which indicates whether a non-zero transformation coefficient exists in the transformation unit, based on a transformation depth of the transformation unit.
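The two equations of paragraph [000230] can be checked directly. A sketch, assuming both flags are available for the coefficient in question:

```python
def base_level(g1, g2):
    # baseLevel = 1 + coeff_abs_level_greater1_flag + coeff_abs_level_greater2_flag
    return 1 + g1 + g2

def remaining(abs_coeff, g1, g2):
    # coeff_abs_level_remaining = absCoeff - baseLevel
    return abs_coeff - base_level(g1, g2)

# absCoeff = 7: both flags are 1, so baseLevel = 3 and remaining = 4
print(base_level(1, 1), remaining(7, 1, 1))  # 3 4
```

The decoder inverts this by reconstructing absCoeff = baseLevel + coeff_abs_level_remaining.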
[000234] The context modeler 1420 may determine different context models for a case where the size of the transformation unit is equal to the size of the coding unit, that is, when the transformation depth of the transformation unit is 0, and a case where the size of the transformation unit is smaller than the size of the coding unit, that is, when the transformation depth of the transformation unit is not 0. In more detail, the context modeler 1420 may change a context increase parameter ctxInc for determining a context model based on the transformation depth of the transformation unit, so as to distinguish the case where the transformation depth of the transformation unit is 0 from the case where it is not 0, and thereby change a context index ctxIdx for determining a context model for entropy coding the transformation unit significant coefficient indicator. [000235] The transformation unit significant coefficient indicator may be defined separately for the luminance and chrominance components. A context model for entropy coding the transformation unit significant coefficient indicator cbf_luma of a luminance component transformation unit may be determined using the context increase parameter ctxInc, which changes depending on whether the transformation depth of the transformation unit is 0. A context model for entropy coding a chrominance component transformation unit significant coefficient indicator cbf_cb or cbf_cr may be determined using the transformation depth value (trafoDepth) as the context increase parameter ctxInc. [000236] In operation 2530, the regular coding engine 1432 arithmetically encodes the transformation unit significant coefficient indicator based on the determined context model. [000237] FIG. 26 is a block diagram of an entropy decoding apparatus 2600 according to an embodiment of the present invention.
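The context selection of paragraphs [000234] and [000235] can be sketched as follows. The text only says the luminance ctxInc changes depending on whether the depth is 0; the particular 0/1 assignment below is an assumption for illustration:

```python
def cbf_ctx_inc(is_luma, trafo_depth):
    """Context increase parameter ctxInc for the cbf flag: for cbf_luma only
    the depth == 0 / depth != 0 distinction matters (assumed mapping: 1 when
    depth is 0, else 0); for cbf_cb / cbf_cr the transformation depth itself
    is used as ctxInc."""
    if is_luma:
        return 1 if trafo_depth == 0 else 0
    return trafo_depth

print(cbf_ctx_inc(True, 0), cbf_ctx_inc(True, 2), cbf_ctx_inc(False, 2))
```

Note that neither branch consults the transformation unit's size, only its depth, which is the point the claim later makes explicit.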
The entropy decoding apparatus 2600 corresponds to the entropy decoder 220 of the video decoding apparatus 200 of FIG. 2. The entropy decoding apparatus 2600 performs the inverse of the entropy coding operation performed by the entropy coding apparatus 1400 described above. [000238] Referring to FIG. 26, the entropy decoding apparatus 2600 includes a context modeler 2610, a regular decoding engine 2620, a bypass decoding engine 2630, and a de-binarizer 2640. [000239] A syntax element encoded using bypass coding is output to the bypass decoding engine 2630 to be arithmetically decoded, and a syntax element encoded using regular coding is arithmetically decoded by the regular decoding engine 2620. The regular decoding engine 2620 arithmetically decodes a binary value of a current syntax element based on a context model provided by the context modeler 2610, thereby producing a bin string. [000240] Like the context modeler 1420 of FIG. 14 described above, the context modeler 2610 may select a context model for entropy decoding the transformation unit significant coefficient indicator cbf based on the transformation depth of a transformation unit. That is, the context modeler 2610 may determine different context models for a case where the size of the transformation unit is equal to the size of the coding unit, that is, when the transformation depth of the transformation unit is 0, and a case where the size of the transformation unit is smaller than the size of the coding unit, that is, when the transformation depth of the transformation unit is not 0.
In more detail, the context modeler 2610 may change a context increase parameter ctxInc for determining a context model based on the transformation depth of the transformation unit, so as to distinguish the case where the transformation depth of the transformation unit is 0 from the case where it is not 0, and thereby change a context index ctxIdx for determining a context model for entropy decoding the transformation unit significant coefficient indicator cbf. [000241] The structure of the transformation units included in a coding unit is determined based on a split transformation indicator split_transform_flag, which indicates whether a coding unit obtained from a bit stream is divided into transformation units, and the transformation depth of a transformation unit can be determined based on the number of times the coding unit is divided to reach that transformation unit. [000242] The transformation unit significant coefficient indicator cbf may be defined separately for the luminance and chrominance components. A context model for entropy decoding the transformation unit significant coefficient indicator cbf_luma of a luminance component transformation unit may be determined using the context increase parameter ctxInc, which changes depending on whether the transformation depth of the transformation unit is 0. A context model for entropy decoding a chrominance component transformation unit significant coefficient indicator cbf_cb or cbf_cr may be determined using the transformation depth value (trafoDepth) as the context increase parameter ctxInc. [000243] The de-binarizer 2640 reconstructs the bin strings that are arithmetically decoded by the regular decoding engine 2620 or the bypass decoding engine 2630 back into syntax elements.
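The ctxIdx selection described above, adding a context increase to an offset to pick one model out of a table, can be sketched like this; the table contents and offset values are placeholders, not values from the standard:

```python
def select_context(models, ctx_offset, ctx_inc):
    """ctxIdx = ctxOffset + ctxInc selects one model from a table of context
    models, a common CABAC arrangement; changing ctxInc with the
    transformation depth therefore changes which model is used."""
    return models[ctx_offset + ctx_inc]

# placeholder table of context models for the cbf flag
models = ["cbf_ctx0", "cbf_ctx1", "cbf_ctx2"]
print(select_context(models, 0, 1))  # cbf_ctx1
```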
[000244] The entropy decoding apparatus 2600 arithmetically decodes syntax elements related to transformation units, such as coeff_abs_level_remaining, SigMap, coeff_abs_level_greater1_flag, and coeff_abs_level_greater2_flag, in addition to the transformation unit significant coefficient indicator cbf, and outputs them. When the syntax elements related to a transformation unit are reconstructed, the data included in the transformation units can be decoded using inverse quantization, inverse transformation, and predictive decoding, based on the reconstructed syntax elements. [000245] FIG. 27 is a flowchart of a video entropy decoding method according to an embodiment of the present invention. [000246] Referring to FIG. 27, in operation 2710, a transformation unit included in a coding unit and used to inversely transform the coding unit is determined. As described above, the structure of the transformation units included in a coding unit can be determined based on a split transformation indicator split_transform_flag, which indicates whether the coding unit obtained from a bit stream is divided into transformation units. In addition, the transformation depth of a transformation unit can be determined based on the number of times the coding unit is divided to reach that transformation unit. [000247] In operation 2720, the context modeler 2610 obtains, from the bit stream, a transformation unit significant coefficient indicator that indicates whether a non-zero transformation coefficient exists in the transformation unit. [000248] In operation 2730, the context modeler 2610 determines a context model for arithmetically decoding the transformation unit significant coefficient indicator based on the transformation depth of the transformation unit.
As described above, the context modeler 2610 may determine different context models for a case where the size of the transformation unit is equal to the size of the coding unit, that is, when the transformation depth of the transformation unit is 0, and a case where the size of the transformation unit is smaller than the size of the coding unit, that is, when the transformation depth of the transformation unit is not 0. In more detail, the context modeler 2610 may change a context increase parameter ctxInc for determining a context model based on the transformation depth of the transformation unit, so as to distinguish the case where the transformation depth of the transformation unit is 0 from the case where it is not 0, and thereby change a context index ctxIdx for determining a context model for entropy decoding the transformation unit significant coefficient indicator. [000249] In operation 2740, the regular decoding engine 2620 arithmetically decodes the transformation unit significant coefficient indicator based on the context model provided by the context modeler 2610. [000250] The foregoing embodiments of the present invention can also be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium is any data storage device that can store data that can thereafter be read by a computer system. Examples of computer-readable recording media include read-only memory (ROM), random access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
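Operations 2730 and 2740 of FIG. 27 can be condensed into a few lines. Here decode_bin stands in for the regular decoding engine 2620, and the 0/1 mapping for the luminance component is an assumed illustration of the depth-based selection described in the text:

```python
def decode_cbf(trafo_depth, decode_bin):
    """Choose the context model from the transformation depth (operation
    2730), then arithmetically decode the cbf bin with it (operation 2740).
    decode_bin is a stand-in for the regular decoding engine."""
    ctx_inc = 1 if trafo_depth == 0 else 0   # assumed luminance-case mapping
    return decode_bin(ctx_inc)

# stub engine: pretend the bin decoded with context 1 happens to be 1
print(decode_cbf(0, lambda ctx: 1 if ctx == 1 else 0))  # 1
```

The returned bin is then interpreted as in operation 2720: 1 means at least one non-zero transformation coefficient exists in the transformation unit.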
[000251] Although the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by one of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
Claims (1) [0001] 1. METHOD OF DECODING A VIDEO, the method characterized by comprising: obtaining a split transformation indicator for a current depth from a bit stream; when the split transformation indicator indicates a non-split for the current depth, determining that a transformation depth is equal to the current depth; determining a context increase parameter for determining a context index based on whether the transformation depth is equal to a predetermined value, without using a size of a transformation unit; obtaining a context model for a transformation unit significant coefficient indicator among a plurality of context models, using the context index obtained by adding the context increase parameter and a context offset; arithmetically decoding the transformation unit significant coefficient indicator based on the context model; and determining whether at least one non-zero transformation coefficient exists in the transformation unit of the transformation depth based on the transformation unit significant coefficient indicator; wherein, when the split transformation indicator indicates a split for the current depth, the transformation unit for the current depth is divided into one or more transformation units for a next depth and a split transformation indicator for the next depth is obtained from the bit stream, and wherein the context model includes information for determining a most probable symbol (MPS).
Priority: US 61/667,117, filed 2012-07-02; PCT/KR2013/005870, filed 2013-07-02 (published as WO2014007524A1).